
    Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR

    • Provisions in many data protection laws require a legal basis, or at the very least safeguards, for significant, solely automated decisions; Article 22 of the GDPR is the most notable. • Little attention has been paid to Article 22 in light of decision-making processes with multiple stages, potentially both manual and automated, which together might impact upon decision subjects in different ways. • Using stylised examples grounded in real-world systems, we raise five distinct complications relating to interpreting Article 22 in the context of such multi-stage profiling systems. • These are: the potential for selective automation on subsets of data subjects despite generally adequate human input; the ambiguity around where to locate the decision itself; whether 'significance' should be interpreted in terms of any potential effects or only selectively in terms of realised effects; the potential for upstream automation processes to foreclose downstream outcomes despite human input; and that a focus on the final step may distract from the status and importance of upstream processes. • We argue that the nature of these challenges will make it difficult for courts or regulators to distil a set of clear, fair and consistent interpretations for many realistic contexts.
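    To make the 'selective automation' complication concrete, the following minimal Python sketch (our own illustration, not drawn from the paper; the names auto_screen and human_review and all thresholds are hypothetical) models a two-stage screening pipeline in which an automated upstream filter forecloses the outcome for one subset of applicants before any human is involved, even though the process as a whole includes human input.

        from dataclasses import dataclass

        @dataclass
        class Applicant:
            id: str
            score: float  # output of an upstream automated profiling model

        def auto_screen(applicants, threshold=0.3):
            # Stage 1 (automated): applicants below the threshold are rejected
            # outright; for this subset the decision is effectively solely automated.
            rejected = [a for a in applicants if a.score < threshold]
            forwarded = [a for a in applicants if a.score >= threshold]
            return rejected, forwarded

        def human_review(applicants):
            # Stage 2 (manual): a human decides the remaining cases.
            # Placeholder logic standing in for genuine human judgement.
            return {a.id: ("approve" if a.score >= 0.6 else "refer") for a in applicants}

        applicants = [Applicant("A", 0.2), Applicant("B", 0.5), Applicant("C", 0.8)]
        auto_rejected, remainder = auto_screen(applicants)
        decisions = human_review(remainder)
        # Applicant "A" never reaches a human reviewer: where in this pipeline
        # is 'the decision', and is it significant in the Article 22 sense?
        print([a.id for a in auto_rejected], decisions)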

    When Data Protection by Design and Data Subject Rights Clash

    • Data Protection by Design (DPbD), a holistic approach to embedding data protection principles in the technical and organisational measures undertaken by data controllers, building on the notion of Privacy by Design, is now a qualified duty in the GDPR. • Practitioners have seen DPbD less holistically, instead framing it through the confidentiality-focussed lens of Privacy Enhancing Technologies (PETs). • While focussing primarily on confidentiality risk, we show that some DPbD strategies deployed by large data controllers result in personal data which, despite remaining clearly reidentifiable by a capable adversary, make it difficult for the controller to grant data subjects rights (eg access, erasure, objection) over them for the purposes of managing this risk. • Informed by case studies of Apple’s Siri voice assistant and Transport for London’s Wi-Fi analytics, we suggest three main ways to make deployed DPbD more accountable and data subject–centric: building parallel systems to fulfil rights, including dealing with volunteered data; making inevitable trade-offs more explicit and transparent through Data Protection Impact Assessments; and through ex ante and ex post information rights (arts 13–15), which we argue may require the provision of information concerning DPbD trade-offs. • Despite steep technical hurdles, we call both for researchers in PETs to develop rigorous techniques to balance privacy-as-control with privacy-as-confidentiality, and for DPAs to consider tailoring guidance and future frameworks to better oversee the trade-offs being made by primarily well-intentioned data controllers employing DPbD.
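    One way to see the tension described above is with a minimal sketch of a common PET pattern: pseudonymising identifiers with a secret, periodically rotated salt (loosely in the spirit of Wi-Fi analytics deployments; the Python below is our own assumption-laden illustration, not code from either case study). Once the salt is rotated or destroyed, the controller can no longer recompute a given subject's pseudonym to fulfil an access or erasure request, yet the records may remain re-identifiable to a capable adversary holding the salt or sufficient auxiliary information.

        import hashlib, hmac, os

        # Hypothetical PET: store a keyed hash of a device identifier
        # instead of the raw value.
        SALT = os.urandom(32)  # secret salt, rotated and eventually discarded

        def pseudonymise(device_id: str) -> str:
            # Keyed hash (HMAC-SHA256) of the identifier under the current salt.
            return hmac.new(SALT, device_id.encode(), hashlib.sha256).hexdigest()

        records = {pseudonymise("aa:bb:cc:dd:ee:ff"): {"visits": 12}}

        # Subject access request: after SALT is rotated or destroyed, the
        # controller cannot recompute the pseudonym and so cannot locate this
        # subject's record, even though a capable adversary with the salt
        # (or rich auxiliary data) could still re-identify it.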

    The Need for Sensemaking in Networked Privacy and Algorithmic Responsibility

    This paper proposes that two significant and emerging problems facing our connected, data-driven society may be more effectively solved by being framed as sensemaking challenges. The first is empowering individuals to take control of their privacy in device-rich information environments where personal information is fed transparently to complex networks of information brokers. Although sensemaking is often framed as an analytical activity undertaken by experts, non-specialist end-users are now being forced to make expert-like decisions in complex information environments, so we argue that it is both appropriate and important to consider sensemaking challenges in this context. The second is supporting human-in-the-loop algorithmic decision-making, in which important decisions bringing direct consequences for individuals, or indirect consequences for groups, are made with the support of data-driven algorithmic systems. In both privacy and algorithmic decision-making, framing the problems as sensemaking challenges acknowledges complex and ill-defined problem structures, and affords the opportunity to view these activities both as building up relevant expertise schemas over time and as being driven potentially by recognition-primed decision making.

    Detecting Sarcasm in Multimodal Social Platforms

    Sarcasm is a peculiar form of sentiment expression, where the surface sentiment differs from the implied sentiment. Sarcasm detection on social media platforms has in the past been applied mainly to textual utterances, where lexical indicators (such as interjections and intensifiers), linguistic markers, and contextual information (such as user profiles or past conversations) were used to detect the sarcastic tone. However, modern social media platforms allow users to create multimodal messages in which audiovisual content is integrated with text, making the analysis of any single mode in isolation incomplete. In our work, we first study the relationship between the textual and visual aspects of multimodal posts from three major social media platforms, i.e., Instagram, Tumblr and Twitter, and we run a crowdsourcing task to quantify the extent to which images are perceived as necessary by human annotators. Moreover, we propose two different computational frameworks to detect sarcasm that integrate the textual and visual modalities. The first approach exploits visual semantics trained on an external dataset and concatenates the semantic features with state-of-the-art textual features. The second method adapts a visual neural network initialized with parameters trained on ImageNet to multimodal sarcastic posts. Results show the positive effect of combining modalities for the detection of sarcasm across platforms and methods. Comment: 10 pages, 3 figures, final version published in the Proceedings of ACM Multimedia 201
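    As a rough illustration of the first, feature-concatenation framework (a sketch under our own assumptions, not the authors' code: the feature dimensions, the random placeholder features, and the choice of classifier are all invented), one could fuse pre-computed visual-semantics features with textual features and train a standard classifier on the combined vector.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Placeholder features: in the paper's first framework the visual
        # semantics come from a model trained on an external dataset and the
        # textual features are state-of-the-art text representations; here
        # both are random stand-ins purely to show the fusion step.
        n_posts = 1000
        text_feats = np.random.randn(n_posts, 300)    # one text vector per post
        visual_feats = np.random.randn(n_posts, 512)  # one visual-semantics vector per post
        labels = np.random.randint(0, 2, n_posts)     # 1 = sarcastic, 0 = not

        # Early fusion: concatenate the two modalities into a single feature vector.
        fused = np.concatenate([text_feats, visual_feats], axis=1)

        clf = LogisticRegression(max_iter=1000).fit(fused, labels)
        print("training accuracy:", clf.score(fused, labels))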

    'It's Reducing a Human Being to a Percentage': Perceptions of Procedural Justice in Algorithmic Decisions

    Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to ‘meaningful information about the logic’ behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no ‘best’ approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.